


Conference Proceedings


KCC 2021


Korean Title: 대조 손실이 VAE의 잠재 변수에 미치는 영향
English Title: Effect of Contrastive Loss on Latent Space of Variational Autoencoder
Author(s): 이현섭 (Hyunsub Lee), 최희열 (Heeyoul Choi)
Citation: Vol. 48, No. 01, pp. 633-635 (June 2021)
Korean Abstract: (not provided)
English Abstract:
The variational autoencoder (VAE) is a generative model that produces samples from a latent space. Compared to other generative models such as generative adversarial networks (GANs) and autoregressive models, the VAE is known to be easy to train and to learn useful data representations. However, when a VAE trained on MNIST generates samples from the latent space, we sometimes obtain ambiguous images that seem to lie between different classes. We attribute this to the clusters in the latent space lying too close together for samples to be drawn reliably from a specific class. In this paper, we address this problem with contrastive learning. The model learns separable clusters in the latent space, each corresponding to one class, and we find that the separated clusters lead to high performance on classification tasks. Moreover, we observe that the contrastive loss helps the VAE learn a more disentangled representation.
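The abstract does not spell out the exact objective, but the core idea, adding a contrastive term over latent codes to the standard VAE objective, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the architecture, the SupCon-style supervised contrastive formulation applied to the latent means, and the weights beta and gamma are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Fully connected VAE for flattened 28x28 MNIST images (illustrative sizes)."""
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def contrastive_loss(z, labels, temperature=0.5):
    """Supervised contrastive term (assumed formulation): pull same-class latent
    codes together and push different-class codes apart."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature  # (N, N) cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    # Denominator sums over all pairs except each sample with itself.
    denom = torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    log_prob = sim - denom
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask  # same-class pairs
    pos_count = pos.sum(dim=1).clamp(min=1)
    return -(log_prob * pos).sum(dim=1).div(pos_count).mean()

def total_loss(x, x_hat, mu, logvar, labels, beta=1.0, gamma=1.0):
    """ELBO terms (reconstruction + KL) plus the contrastive term on the latent means."""
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum") / len(x)
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum() / len(x)
    return recon + beta * kld + gamma * contrastive_loss(mu, labels)
```

In training, one would compute `x_hat, mu, logvar = model(x)` on each labeled batch and backpropagate `total_loss`; the assumed weight `gamma` controls how strongly the class clusters are pulled apart in the latent space.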
Keyword(s): (not provided)